Industry 4.0 aims to optimize the manufacturing environment by leveraging new technological advances, such as new sensing capabilities and artificial intelligence. The DRAEM technique has shown state-of-the-art performance for unsupervised classification. Its ability to create anomaly maps highlighting areas where defects are likely located can be leveraged to provide cues to supervised classification models and enhance their performance. Our research shows that the best performance is achieved when training a defect detection model on an image together with its corresponding anomaly map as input. Furthermore, such a setting provides consistent performance regardless of whether defect detection is framed as a binary or multiclass classification problem, and is not affected by class balancing policies. We performed the experiments on three datasets with real-world data provided by Philips Consumer Lifestyle BV.
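As an illustrative sketch (not the paper's actual pipeline), feeding both the image and its anomaly map to a classifier amounts to stacking them as input channels; the helper name below is hypothetical:

```python
def stack_input(image, anomaly_map):
    """Concatenate a grayscale image and its anomaly map channel-wise.

    image, anomaly_map: H x W nested lists of floats.
    Returns an H x W x 2 nested list (two input channels for the classifier).
    """
    assert len(image) == len(anomaly_map)
    assert len(image[0]) == len(anomaly_map[0])
    return [[[image[i][j], anomaly_map[i][j]]
             for j in range(len(image[0]))]
            for i in range(len(image))]
```

A real model would consume this as a two-channel tensor; the list representation here only illustrates the channel concatenation.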
Quality control is a crucial activity performed by manufacturing companies to ensure their products conform to the requirements and specifications. The introduction of artificial intelligence models enables the automation of visual quality inspection, speeding up the inspection process and ensuring all products are evaluated under the same criteria. In this research, we compare supervised and unsupervised defect detection techniques and explore data augmentation techniques to mitigate the data imbalance in the context of automated visual inspection. Furthermore, we use Generative Adversarial Networks for data augmentation to enhance the classifiers' discriminative performance. Our results show that state-of-the-art unsupervised defect detection does not match the performance of supervised models but can be used to reduce the labeling workload by more than 50%. Furthermore, the best classification performance was achieved considering GAN-based data generation, with ROC AUC scores equal to or higher than 0.9898, even when increasing the dataset imbalance by leaving only 25% of the images denoting defective products. We performed the research with real-world data provided by Philips Consumer Lifestyle BV.
Predicting the political polarity of news headlines is a challenging task that becomes even more challenging in a multilingual setting with low-resource languages. To deal with this, we propose a learning framework that utilises inferential commonsense knowledge via a Translate-Retrieve-Translate strategy. To begin with, we use translation and retrieval to acquire the inferential knowledge in the target language. We then employ an attention mechanism to emphasise important inferences. We finally integrate the attended inferences into a multilingual pre-trained language model for the task of bias prediction. To evaluate the effectiveness of our framework, we present a dataset of over 62.6K multilingual news headlines in five European languages annotated with their respective political polarities. We evaluate several state-of-the-art multilingual pre-trained language models, since their performance tends to vary across languages (low/high resource). Evaluation results demonstrate that our proposed framework is effective regardless of the models employed. Overall, the best performing model trained with only headlines achieves 0.90 accuracy and F1, and a 0.83 Jaccard score. With attended knowledge in our framework, the same model shows an increase of 2.2% in accuracy and F1, and 3.6% in Jaccard score. Extending our experiments to individual languages reveals that the models we analyze perform significantly worse for Slovenian than for the other languages in our dataset. To investigate this, we assess the effect of translation quality on prediction performance, which indicates that the disparity in performance is most likely due to poor translation quality. We release our dataset and scripts at https://github.com/Swati17293/KG-Multi-Bias for future research. Our framework has the potential to benefit journalists, social scientists, news producers, and consumers.
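The attention step over retrieved inferences can be sketched as plain dot-product attention; this is an illustrative simplification under assumed vector representations, not the paper's actual architecture:

```python
import math

def attend(query, inferences):
    """Weight retrieved inference vectors by dot-product attention.

    query: headline representation (list of floats).
    inferences: list of inference vectors, same dimension as query.
    Returns (attended_vector, weights); weights sum to 1 via softmax.
    """
    scores = [sum(q * k for q, k in zip(query, inf)) for inf in inferences]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(query)
    attended = [sum(w * inf[d] for w, inf in zip(weights, inferences))
                for d in range(dim)]
    return attended, weights
```

The attended vector would then be concatenated with (or added to) the language model's headline encoding before the classification head.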
We propose deploying a two-layer machine learning model to protect against adversarial attacks. The first layer determines whether the data has been tampered with, while the second layer solves the domain-specific problem. We explore three sets of features and three dataset variants to train the machine learning models. Our results show that clustering algorithms achieve promising results. In particular, we find that the best results were obtained by applying the DBSCAN algorithm to the Structural Similarity Index Measure computed between the images and a white reference image.
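A minimal, self-contained sketch of the idea: images yield one SSIM score each against the white reference (SSIM computation omitted here), and density-based clustering flags low-density scores as noise, i.e. likely tampered inputs. The simplified 1-D DBSCAN below is for illustration only, not the library implementation used in the study:

```python
def dbscan_1d(scores, eps, min_pts):
    """Cluster 1-D SSIM scores; points labeled -1 are noise (~ tampered images)."""
    labels = [None] * len(scores)
    cluster = -1
    for i, s in enumerate(scores):
        if labels[i] is not None:
            continue
        neigh = [j for j, t in enumerate(scores) if abs(t - s) <= eps]
        if len(neigh) < min_pts:
            labels[i] = -1            # provisionally noise
            continue
        cluster += 1                  # start a new cluster from this core point
        labels[i] = cluster
        queue = [j for j in neigh if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster   # noise reachable from a core -> border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            neigh_j = [k for k, t in enumerate(scores) if abs(t - scores[j]) <= eps]
            if len(neigh_j) >= min_pts:
                queue.extend(k for k in neigh_j if labels[k] is None)
    return labels
```

Genuine images cluster at high SSIM against the white reference, while a tampered image falls outside any dense neighborhood and is labeled -1; `eps` and `min_pts` are the usual DBSCAN hyperparameters.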
In this study, we developed machine learning models to forecast future sensor readings of a waste-to-fuel plant, which would enable proactive control of the plant's operations. We developed models that predict sensor readings 30 and 60 minutes into the future. The models were trained on historical data, and forecasts were issued based on sensor readings taken at a specific time. We compared three types of models: (a) a naïve forecast that considers only the last observed value, (b) neural networks that issue forecasts based on past sensor data (we considered different time window sizes for the forecast), and (c) a gradient boosted tree regressor created with a set of features we developed. We developed and tested the models on a waste-to-fuel plant in Canada. We found that approach (c) provided the best results, while approach (b) yielded mixed results and could not consistently outperform the naïve forecast.
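The naïve baseline in (a) is a persistence forecast: repeat the last observed value over the horizon. A minimal sketch, with an MAE helper for scoring against it (function names are illustrative):

```python
def naive_forecast(history, horizon):
    """Persistence baseline: predict the last observed value for every step."""
    return [history[-1]] * horizon

def mae(y_true, y_pred):
    """Mean absolute error between two equal-length series."""
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)
```

Any learned model (the neural networks in (b) or the boosted trees in (c)) only earns its keep if its MAE beats this baseline on held-out data.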
Quality control is a crucial activity performed by manufacturing companies to ensure that their products meet quality standards and to avoid potential damage to the brand's reputation. Decreasing sensor costs and increasing connectivity have enabled a growing digitalization of manufacturing. Furthermore, artificial intelligence enables higher degrees of automation, reducing the overall cost and time required for defect inspection. This research compares three active learning approaches (with single and multiple oracles) applied to visual inspection. We propose a novel approach for the probability calibration of classification models and two new metrics to assess calibration performance without the need for ground truth. We performed the experiments on real-world data provided by Philips Consumer Lifestyle BV. Our results show that, considering a threshold of p=0.95, the explored active learning settings can reduce the data labeling effort by between 3% and 4% without compromising the overall quality goals. Furthermore, we show that the proposed metrics successfully capture relevant information that would otherwise be available only through ground-truth data. The proposed metrics can therefore be used to estimate the quality of a model's probability calibration without incurring the labeling effort required to obtain ground-truth data.
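The p=0.95 threshold amounts to a confidence-based triage: predictions the model is sure about are accepted automatically, the rest are routed to a human labeler. A hedged sketch of that routing for a binary classifier (the helper name is hypothetical; the paper's calibration metrics themselves are not reproduced here):

```python
def triage(probabilities, threshold=0.95):
    """Split binary predictions into auto-accepted and routed-to-human sets.

    probabilities: predicted probability of the positive class per sample.
    Returns (auto_indices, manual_indices).
    """
    auto, manual = [], []
    for i, p in enumerate(probabilities):
        confidence = max(p, 1 - p)  # confidence in the predicted class
        (auto if confidence >= threshold else manual).append(i)
    return auto, manual
```

The fraction landing in `manual` is the residual labeling effort; the 3% to 4% reduction reported above corresponds to how much the active learning settings shrink that fraction. This routing is only trustworthy if the probabilities are well calibrated, which is what the proposed metrics aim to verify without ground truth.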
Quality control is a crucial activity performed by manufacturing companies to verify that products conform to the requirements and specifications. Standardized quality control ensures that all products are evaluated under the same criteria. Decreasing sensor and connectivity costs have enabled a growing digitalization of manufacturing, providing greater data availability. This data availability has spurred the development of artificial intelligence models that allow higher degrees of automation and reduced bias when inspecting products. Furthermore, the increased inspection speed reduces the overall cost and time required for defect inspection. In this research, we compare five streaming machine learning algorithms applied to visual defect inspection, using real-world data provided by Philips Consumer Lifestyle BV. Furthermore, we compare them in a streaming active learning context, which reduces the data labeling effort in real-world settings. Our results show that, in the worst case, active learning reduces the data labeling effort by almost 15% while preserving an acceptable classification performance. The use of machine learning models for automated visual inspection is expected to speed up quality inspection by up to 40%.
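The combination of streaming learning and active learning can be sketched as an online model that requests a label only when its prediction is uncertain. This is a minimal illustration under assumed choices (online logistic regression via SGD, an uncertainty-margin query strategy); the abstract does not specify which algorithms or query strategy were actually used:

```python
import math

class OnlineLogReg:
    """Tiny online logistic regression updated one sample at a time."""

    def __init__(self, dim, lr=0.1):
        self.w = [0.0] * dim
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def learn_one(self, x, y):
        err = self.predict_proba(x) - y   # gradient of log-loss w.r.t. z
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

def stream_with_active_learning(stream, model, margin=0.2):
    """Query the oracle only when the prediction is close to 0.5."""
    queried = 0
    for x, y in stream:
        p = model.predict_proba(x)
        if abs(p - 0.5) < margin:
            model.learn_one(x, y)  # ask for the label, then update online
            queried += 1
    return queried
```

As the model becomes confident on easy samples it stops requesting their labels, which is the mechanism behind the labeling-effort reduction reported above.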